The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, 32% of participants stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
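As a rough illustration of two of the practices surveyed above, the following sketch combines k-fold cross-validation on a training set with ensembling of the resulting fold models by averaging their predicted probabilities. It is a minimal, hypothetical example (the `build_model` factory and the array shapes are assumptions), not any participant's actual pipeline.

```python
# Hypothetical sketch: k-fold cross-validation plus ensembling of the fold
# models by averaging their predicted class probabilities.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate_and_ensemble(train_x, train_y, test_x, build_model, k=5):
    """Train one model per fold, then ensemble all fold models on the test set."""
    fold_predictions = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(train_x):
        model = build_model()                               # fresh model per fold
        model.fit(train_x[train_idx], train_y[train_idx])   # fit on k-1 folds
        # the held-out fold (val_idx) would normally drive model selection / early stopping
        fold_predictions.append(model.predict_proba(test_x))
    # ensemble of identical architectures: average per-fold probabilities, then argmax
    return np.mean(fold_predictions, axis=0).argmax(axis=1)
```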
Sample assignment plays a prominent role in modern object detection approaches. However, most existing methods rely on manual design to assign positive/negative samples, which does not explicitly establish the relationship between sample assignment and object detection performance. In this work, we propose a novel dynamic sample assignment scheme based on hyper-parameter search. We first define the number of positive samples assigned to each ground truth as a hyper-parameter and employ a surrogate optimization algorithm to derive the optimal choice. We then design a dynamic sample assignment procedure that selects the optimal number of positives at each training iteration. Experiments demonstrate that the resulting HPS-Det brings improved performance over different object detection baselines. Moreover, we analyze the reusability of the searched hyper-parameters when transferring across datasets and across backbones for object detection, which exhibits the superiority and versatility of our method.
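To make the core idea concrete, the sketch below treats the number of positives per ground truth as a single hyper-parameter k and marks, for every ground truth, the k highest-IoU candidate anchors as positive. This is a simplified, assumed realization of top-k assignment, not the paper's exact surrogate-optimized procedure; conflict handling between ground truths is deliberately naive here.

```python
# Simplified sketch (not the paper's exact algorithm): the number of positives
# per ground truth is a hyper-parameter k; the k highest-IoU anchors per ground
# truth are marked positive, all others negative.
import torch

def assign_topk_positives(iou: torch.Tensor, k: int) -> torch.Tensor:
    """iou: [num_gt, num_anchors] pairwise IoU matrix.
    Returns per-anchor labels: index of the assigned ground truth, or -1 (negative)."""
    num_gt, num_anchors = iou.shape
    labels = torch.full((num_anchors,), -1, dtype=torch.long)
    topk = iou.topk(min(k, num_anchors), dim=1).indices   # k best anchors per ground truth
    for gt in range(num_gt):
        labels[topk[gt]] = gt          # naive conflict handling: later ground truths win
    return labels

# A surrogate search would evaluate a small set of k values (e.g. 2..16) against a
# proxy detection objective and keep the best one; a dynamic variant re-selects the
# number of positives at each training iteration.
```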
Image BERT pre-training with masked image modeling (MIM) has become a popular practice for self-supervised representation learning. A pioneering work casts MIM as a classification task over a visual vocabulary, tokenizing the continuous visual signals into discrete visual tokens with a pre-learned dVAE. Despite being a feasible solution, the improper discretization still hinders further improvements of image pre-training. Since image discretization has no ground-truth answer, we argue that a masked patch should not be assigned a unique token id even if a better ``tokenizer'' can be obtained. In this work, we introduce an improved BERT-style image pre-training method, namely MC-BEiT, which performs MIM proxy tasks towards relaxed and refined multi-choice training objectives. Specifically, the multi-choice supervision for the masked image patches is formed by the soft probability vectors of the discrete token ids, which are predicted by the off-the-shelf image ``tokenizer'' and further refined by high-level inter-patch perceptions, resorting to the observation that similar patches should share their choices. Extensive experiments on classification, segmentation, and detection tasks demonstrate the superiority of our method, e.g., the pre-trained ViT-B achieves 84.1% top-1 fine-tuning accuracy on ImageNet-1K classification, 49.2% AP^b and 44.0% AP^m for object detection and instance segmentation on COCO, and 50.8% mIoU on ADE20K semantic segmentation, outperforming competitive counterparts. The code will be available at https://github.com/lixiaotong97/mc-beit.
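The central change relative to standard MIM can be sketched as a soft cross-entropy against the tokenizer's probability vectors instead of a hard cross-entropy against single token ids. The following is a minimal sketch under assumed tensor shapes; the paper's additional inter-patch refinement of the targets is omitted.

```python
# Minimal sketch of a multi-choice MIM objective under assumed shapes:
# soft cross-entropy between the student's predictions and the tokenizer's
# probability vectors, evaluated on masked patches only.
import torch
import torch.nn.functional as F

def multi_choice_mim_loss(student_logits, tokenizer_logits, mask):
    """student_logits, tokenizer_logits: [B, N, vocab]; mask: [B, N] boolean."""
    soft_targets = tokenizer_logits.softmax(dim=-1)      # relaxed multi-choice targets
    log_probs = F.log_softmax(student_logits, dim=-1)
    loss = -(soft_targets * log_probs).sum(dim=-1)       # soft cross-entropy per patch
    return loss[mask].mean()                             # supervise masked patches only
```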
Modern single-cell flow and mass cytometry technologies measure the expression of several proteins of the individual cells within a blood or tissue sample. Each profiled biological sample is thus represented by hundreds of thousands of multidimensional cell feature vectors, which incurs a high computational cost when predicting each biological sample's associated phenotype with machine learning models. Such a large set cardinality also limits the interpretability of machine learning models, as it is difficult to track how each individual cell influences the final prediction. We propose using kernel mean embedding to encode the cellular landscape of each profiled biological sample. Although our foremost goal is a more transparent model, we find that our method achieves comparable or better accuracy with a simple linear classifier. As a result, our model contains very few parameters yet still performs on par with deep learning models that have millions of parameters. In contrast to deep learning approaches, the linearity and the sub-selection step of our model make it easy to interpret the classification results. Analysis further shows that our method admits rich biological interpretability for linking cellular heterogeneity to clinical phenotypes.
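A kernel mean embedding collapses each sample's cloud of cell vectors into one fixed-length vector by averaging a kernel feature map over the cells, after which a plain linear classifier suffices. The sketch below approximates an RBF kernel with random Fourier features and uses toy synthetic data; the dimensions, kernel choice, and classifier are assumptions, not the paper's exact setup.

```python
# Toy sketch: random-Fourier-feature approximation of an RBF kernel mean embedding,
# followed by a linear classifier. Data, dimensions and kernel are placeholders.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# stand-in data: 40 biological samples, each a cloud of 1000 cells x 30 proteins
samples = [rng.normal(size=(1000, 30)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)                  # binary phenotype per sample

rff = RBFSampler(gamma=1.0, n_components=512, random_state=0).fit(samples[0])
# kernel mean embedding: average the kernel feature map over all cells of a sample
embeddings = np.stack([rff.transform(cells).mean(axis=0) for cells in samples])

clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
# Because the classifier is linear, each cell's contribution to a sample's score
# remains traceable as rff.transform(cells) @ clf.coef_.T / len(cells).
```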
Low-dose CT (LDCT) has attracted significant attention in the field of medical imaging due to the potential health risks of CT-related X-ray radiation to patients. However, reducing the radiation dose degrades the quality of the reconstructed images, thereby compromising diagnostic performance. Various deep learning techniques have been introduced to improve the image quality of LDCT images through denoising. GAN-based denoising methods usually leverage an additional classification network, i.e., the discriminator, to learn the most discriminative differences between the denoised and normal-dose images and thus regularize the denoising model accordingly; they typically focus on either the global structure or local details. To better regularize the LDCT denoising model, this paper proposes a novel method, termed DU-GAN, which leverages U-Net-based discriminators in the GAN framework to learn both the global and local differences between the denoised and normal-dose images in the image and gradient domains. The merit of such a U-Net-based discriminator is that it can not only provide per-pixel feedback to the denoising network through the output of the U-Net, but also focus on the global structure at the semantic level through the middle layers of the U-Net. In addition to the adversarial training in the image domain, we apply another U-Net-based discriminator in the image gradient domain to alleviate the artifacts caused by photon starvation and enhance the edges of the denoised CT images. Furthermore, the CutMix technique enables the per-pixel outputs of the U-Net-based discriminator to provide radiologists with a confidence map that visualizes the uncertainty of the denoising results, facilitating LDCT-based screening and diagnosis. Extensive experiments on simulated and real-world datasets demonstrate superior performance over recently published methods, both qualitatively and quantitatively.
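The dual-domain idea can be sketched as applying the same adversarial objective twice, once on the denoised image and once on its gradient map, with each discriminator returning both a global (encoder) score and a per-pixel (decoder) map. In the sketch below, `disc_img` and `disc_grad` stand in for two U-Net-style discriminators, the finite-difference gradient is a simplification, and the non-saturating loss form is an assumption.

```python
# Sketch of a dual-domain adversarial term for the denoising network. disc_img and
# disc_grad are assumed U-Net-style discriminators returning (global_score, pixel_map).
import torch
import torch.nn.functional as F

def image_gradients(x):
    """Finite-difference gradient magnitude as a simple gradient-domain input."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return F.pad(dx.abs(), (0, 1)) + F.pad(dy.abs(), (0, 0, 0, 1))

def generator_adversarial_loss(denoised, disc_img, disc_grad):
    losses = []
    for disc, inp in ((disc_img, denoised), (disc_grad, image_gradients(denoised))):
        global_score, pixel_map = disc(inp)               # encoder head + decoder head
        losses.append(F.softplus(-global_score).mean()    # global-structure feedback
                      + F.softplus(-pixel_map).mean())    # per-pixel feedback
    return sum(losses)
```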
Semantic segmentation is an important task for scene understanding in self-driving cars and robotics, aiming to assign dense labels to all pixels in an image. Existing work typically improves semantic segmentation performance by exploring different network architectures on a target dataset. Owing to the inherent distribution shift across different datasets, little attention has been paid to learning from multiple datasets simultaneously. In this paper, we propose a simple, flexible, and general method for semantic segmentation, termed Cross-Dataset Collaborative Learning (CDCL). Our goal is to train a unified model that improves performance on every dataset by leveraging information from all of them. Specifically, we first introduce a dataset-aware block (DAB) as the fundamental computing unit of the network, which helps capture homogeneous convolutional representations and heterogeneous statistics across different datasets. Second, we present a dataset alternation training (DAT) mechanism to facilitate the collaborative optimization procedure. We conduct extensive evaluations on diverse segmentation datasets for autonomous driving. Experiments show that our method consistently achieves notable improvements over existing single-dataset and cross-dataset training methods without introducing extra FLOPs. In particular, with the same architecture of PSPNet (ResNet-18), our method delivers improvements of 5.65%, 6.57%, and 5.79% mIoU on the validation sets of Cityscapes, BDD100K, and CamVid, respectively. We also apply CDCL to point cloud 3D semantic segmentation and achieve improved performance, which further validates the superiority and generality of our method. Code and models will be released.
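One plausible realization of a dataset-aware block, sketched below, shares the convolution across datasets (homogeneous representations) while keeping a separate BatchNorm per dataset (heterogeneous statistics); the paper's exact DAB design may differ.

```python
# One plausible dataset-aware block: shared convolution, per-dataset BatchNorm.
import torch
import torch.nn as nn

class DatasetAwareBlock(nn.Module):
    def __init__(self, in_ch, out_ch, num_datasets):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)   # shared weights
        self.bns = nn.ModuleList(nn.BatchNorm2d(out_ch) for _ in range(num_datasets))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, dataset_id):
        # heterogeneous statistics: each dataset normalizes with its own BN
        return self.act(self.bns[dataset_id](self.conv(x)))

# Dataset-alternation training would then cycle through the datasets step by step,
# so the shared weights see every distribution while the BN statistics stay separate.
```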
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, due to scratch training and random initialization without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in the text-to-shape stage as the 3D shape prior. We then use it to initialize a neural radiance field and optimize it with the full prompt. For the text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ large-scale text-video datasets for fine-tuning. However, such a paradigm is computationally expensive. Humans have the amazing ability to learn new visual concepts from just a single exemplar. We hereby study a new T2V generation problem: One-Shot Video Generation, where only a single text-video pair is presented for training an open-domain T2V generator. Intuitively, we propose to adapt the T2I diffusion model pretrained on massive image data for T2V generation. We make two key observations: 1) T2I models are able to generate images that align well with the verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we propose Tune-A-Video with a tailored Sparse-Causal Attention, which generates videos from text prompts via an efficient one-shot tuning of pretrained T2I diffusion models. Tune-A-Video is capable of producing temporally coherent videos for various applications, such as change of subject or background, attribute editing, and style transfer, demonstrating the versatility and effectiveness of our method.
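The sparse-causal attention described above can be sketched as restricting each frame's queries to keys and values taken from the first frame and the immediately preceding frame only. The following is a single-head sketch with assumed shapes, not the released implementation.

```python
# Single-head sketch of sparse-causal attention: frame t attends only to keys/values
# from the first frame and frame t-1 (and just itself for t = 0).
import torch

def sparse_causal_attention(q, k, v):
    """q, k, v: [frames, tokens, dim] per-frame projections for one attention head."""
    outputs = []
    for t in range(q.shape[0]):
        ref = [0] if t == 0 else [0, t - 1]               # first + previous frame
        k_t = torch.cat([k[i] for i in ref], dim=0)       # [tokens * len(ref), dim]
        v_t = torch.cat([v[i] for i in ref], dim=0)
        attn = (q[t] @ k_t.T / k_t.shape[-1] ** 0.5).softmax(dim=-1)
        outputs.append(attn @ v_t)
    return torch.stack(outputs)                           # [frames, tokens, dim]
```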
The recurrent structure is a prevalent framework for the task of video super-resolution, which models the temporal dependency between frames via hidden states. When applied to real-world scenarios with unknown and complex degradations, hidden states tend to contain unpleasant artifacts and propagate them to restored frames. In this circumstance, our analyses show that such artifacts can be largely alleviated when the hidden state is replaced with a cleaner counterpart. Based on the observations, we propose a Hidden State Attention (HSA) module to mitigate artifacts in real-world video super-resolution. Specifically, we first adopt various cheap filters to produce a hidden state pool. For example, Gaussian blur filters are for smoothing artifacts while sharpening filters are for enhancing details. To aggregate a new hidden state that contains fewer artifacts from the hidden state pool, we devise a Selective Cross Attention (SCA) module, in which the attention between input features and each hidden state is calculated. Equipped with HSA, our proposed method, namely FastRealVSR, is able to achieve 2x speedup while obtaining better performance than Real-BasicVSR. Codes will be available at https://github.com/TencentARC/FastRealVSR
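The aggregation step can be sketched as follows: cheap filters (here a box blur and an unsharp-masking sharpen) produce a small pool of hidden-state candidates, and a similarity-weighted sum against the input features selects among them. Filter choices, shapes, and the similarity measure are assumptions; this is an illustration, not the exact HSA/SCA modules.

```python
# Sketch of pooling and re-selecting hidden states. The filters and the similarity-
# based weighting are illustrative stand-ins for the HSA/SCA modules.
import torch
import torch.nn.functional as F

def filter_pool(hidden):
    """hidden: [B, C, H, W] -> list of cheaply filtered variants plus the original."""
    c = hidden.shape[1]
    blur_kernel = torch.full((c, 1, 3, 3), 1.0 / 9.0, dtype=hidden.dtype, device=hidden.device)
    blurred = F.conv2d(hidden, blur_kernel, padding=1, groups=c)    # box blur
    sharpened = hidden + (hidden - blurred)                         # unsharp masking
    return [hidden, blurred, sharpened]

def selective_cross_attention(features, pool):
    """features: [B, C, H, W]; pool: list of hidden-state candidates of the same shape."""
    stack = torch.stack(pool, dim=1)                                     # [B, P, C, H, W]
    scores = (features.unsqueeze(1) * stack).sum(dim=2, keepdim=True)    # per-pixel similarity
    weights = scores.softmax(dim=1)                                      # attention over the pool
    return (weights * stack).sum(dim=1)                                  # aggregated hidden state
```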